Sydney | $180k + Super
If you are the type of engineer who enjoys understanding how data moves through systems - pipelines, warehouses, storage layers and infrastructure - this role may interest you.
You would be joining a small B2B software company where data sits close to the core of the product. The team is technical, curious and hands-on. There is no management layer here - you will work directly with the CTO and other engineers to shape how data flows through the organisation.
Your focus will be building and evolving the company’s data platform.
That includes:
Designing and improving data pipelines
Working with AWS data infrastructure
Managing and optimising the Redshift warehouse
Structuring and modelling messy source data
Ensuring data moves reliably across systems
Supporting the reporting and analytics layer
This role suits someone who enjoys building systems and solving complex data problems rather than managing people.
You would be working with a stack that currently includes:
Python
SQL
AWS
RDS (Postgres)
Redshift
AWS Glue
Terraform
Some legacy MySQL
QuickSight today, with interest in Power BI
The engineering team is also experimenting with AI-assisted development and internal tooling that allows safe exploration of replicated data environments.
Tools are useful, but mindset matters more.
You will likely enjoy this role if you:
Like understanding how data flows through systems
Enjoy fixing and improving data pipelines
Take pride in building reliable infrastructure
Prefer hands-on engineering rather than management
Are naturally curious about how systems can be improved
You would be joining a small, highly capable engineering team that values curiosity, technical depth and thoughtful problem solving.
This role suits engineers who enjoy tackling complex technical problems and want the meaningful ownership of their systems that a smaller environment allows.
Sydney-based - hybrid work environment.
Australian working rights essential.
Apply by sending your CV.